Learning from Label Proportions: A Mutual Contamination Framework
Learning from label proportions (LLP) is a weakly supervised setting for classification in which unlabeled training instances are grouped into bags, and each bag is annotated with the proportion of each class occurring in that bag. Prior work on LLP has yet to establish a consistent learning procedure, nor does there exist a theoretically justified, general purpose training criterion. In this work we address these two issues by posing LLP in terms of mutual contamination models (MCMs), which have recently been applied successfully to study various other weak supervision settings. In the process, we establish several novel technical results for MCMs, including unbiased losses and generalization error bounds under non-iid sampling plans. We also point out the limitations of a common experimental setting for LLP, and propose a new one based on our MCM framework.
Review for NeurIPS paper: Learning from Label Proportions: A Mutual Contamination Framework
The work is strongly based on results for mutual contamination models, which were specifically designed and discussed for binary classification problems. As a result, the benchmark approaches in the experimental section are confined to two earlier methods proposed for binary problems. Many up-to-date models that focus on multi-class LLP problems are not addressed by this work. In other words, there is a gap between this work and the multi-class LLP problem. In particular, thanks to deep neural networks, recent work such as [1], [11], [18], [31], and [34] has greatly improved LLP performance on much more complicated datasets, e.g., image data.